1,485 research outputs found

    MORE THAN SMART: A Framework to Make the Distribution Grid More Open, Efficient and Resilient

    This paper is the result of a series of workshops with industry, government and nonprofit leaders focused on helping guide future utility investments and planning for a new distributed generation system. The distribution grid is the final stage in the delivery of electric power, linking electricity substations to customers. To date, no state has initiated a comprehensive effort that includes the planning, design-build and operational requirements for large-scale integration of DER into state-wide distributed generation systems. This paper provides a framework and guiding principles for initiating such a system, and can be used to implement California law AB 327, passed in 2013, which requires investor-owned utilities to submit a DER plan to the CPUC by July 2015 identifying optimal deployment locations.

    Unsupervised Place Recognition with Deep Embedding Learning over Radar Videos

    We learn, in an unsupervised way, an embedding from sequences of radar images that is suitable for solving the place recognition problem using complex radar data. We experiment on 280 km of data and show performance exceeding state-of-the-art supervised approaches, localising correctly 98.38% of the time when using just the nearest database candidate.
    Comment: to be presented at the Workshop on Radar Perception for All-Weather Autonomy at the IEEE International Conference on Robotics and Automation (ICRA) 202
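The retrieval step the abstract describes (matching a query against the nearest database candidate) can be sketched on precomputed embedding vectors. The embedding network itself is learned and is not reproduced here; the cosine-similarity choice and toy data below are assumptions for illustration.

```python
import numpy as np

def retrieve_place(query_emb: np.ndarray, database_embs: np.ndarray) -> int:
    """Return the index of the nearest database embedding by cosine similarity."""
    # Normalise so that a dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    db = database_embs / np.linalg.norm(database_embs, axis=1, keepdims=True)
    return int(np.argmax(db @ q))

# Toy example: three database places; the query is closest to place 1.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
print(retrieve_place(np.array([0.1, 0.9]), db))  # → 1
```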

    Point-based metric and topological localisation between lidar and overhead imagery

    In this paper, we present a method for localising a ground lidar using overhead imagery only. Public overhead imagery, such as Google satellite images, is a readily available resource. It can be used as a map proxy for robot localisation, relaxing the requirement for a prior traversal for mapping as in traditional approaches. While prior approaches have focused on metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and it also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the point cloud scanned by a lidar sensor situated near the centre of the overhead image. After both modalities are expressed as point sets, point-based machine learning methods for localisation are applied.
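The image-to-points idea can be illustrated with a crude, non-learned stand-in: pick high-gradient pixels of the overhead image and express them as 2D points centred on the assumed lidar position. The paper learns this transform; the gradient heuristic and point budget below are assumptions, not the authors' method.

```python
import numpy as np

def image_to_points(img: np.ndarray, keep: int = 512) -> np.ndarray:
    """Crude stand-in for a learned overhead-image-to-point-set transform:
    keep the `keep` highest-gradient pixels as 2D points, with the origin
    at the image centre (where the lidar is assumed to sit)."""
    gy, gx = np.gradient(img.astype(float))          # derivatives along rows, cols
    mag = np.hypot(gx, gy)                           # gradient magnitude
    flat_idx = np.argsort(mag, axis=None)[-keep:]    # strongest responses
    ys, xs = np.unravel_index(flat_idx, mag.shape)
    cy, cx = (np.array(img.shape) - 1) / 2.0
    return np.stack([xs - cx, ys - cy], axis=1)      # (keep, 2) points

pts = image_to_points(np.random.rand(64, 64), keep=100)
print(pts.shape)  # (100, 2)
```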

    Integrated Distributed Energy Resource Pricing and Control

    U.S. policy is to allow owners of distributed resources to provide their services effectively and reliably at scale, and to operate harmoniously on an interconnected distribution and transmission grid. Accordingly, regulation, new business models and technology advances over the past decade have led to significant growth rates in distributed energy resources, including generation, responsive demand, energy conservation and customer adoption of industrial, commercial and residential energy management systems. The result is that several regions are reaching proposed capacity levels for distributed generation that exceed traditional operating and engineering practices for distribution systems. At the same time, policies advocating wholesale spot prices to customer devices (“prices to devices”) have not adequately considered distribution system reliability impacts or the relationship to distributed generation. As such, it is also not clear that current market models or regulations are entirely adequate or appropriate for the several emerging hybrid regional markets, such as California, with millions of distributed energy resources envisioned by the year 2020.

    The Oxford Road Boundaries Dataset

    In this paper we present the Oxford Road Boundaries Dataset, designed for training and testing machine-learning-based road-boundary detection and inference approaches. We have hand-annotated two of the 10 km-long forays from the Oxford RobotCar Dataset and generated several thousand further examples with semi-annotated road-boundary masks from other forays. To boost the number of training samples, we used a vision-based localiser to project labels from the annotated datasets to other traversals at different times and weather conditions. As a result, we release 62605 labelled samples, of which 47639 samples are curated. Each of these samples contains both raw and classified masks for left and right lenses. Our data contains images from a diverse set of scenarios such as straight roads, parked cars, junctions, etc. Files for download and tools for manipulating the labelled data are available at: oxford-robotics-institute.github.io/road-boundaries-dataset
    Comment: Accepted for publication at the workshop "3D-DLAD: 3D-Deep Learning for Autonomous Driving" (WS15), Intelligent Vehicles Symposium (IV 2021)

    Doppler-aware Odometry from FMCW Scanning Radar

    This work explores Doppler information from a millimetre-wave (mm-W) Frequency-Modulated Continuous-Wave (FMCW) scanning radar to make odometry estimation more robust and accurate. Firstly, Doppler information is added to the scan masking process to enhance correlative scan matching. Secondly, we train a Neural Network (NN) to regress forward velocity directly from a single radar scan; we fuse this estimate with the correlative scan matching estimate and show improved robustness to bad estimates caused by challenging environment geometries, e.g. narrow tunnels. We test our method on a novel custom dataset which is released with this work at https://ori.ox.ac.uk/publications/datasets.
    Comment: Accepted to ITSC 202
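A minimal sketch of fusing the two velocity estimates, assuming a simple inverse-variance weighting; the paper does not specify this exact scheme, so the weighting and the toy numbers are illustrative assumptions.

```python
def fuse_velocity(v_scan: float, var_scan: float,
                  v_nn: float, var_nn: float) -> tuple[float, float]:
    """Inverse-variance fusion of a scan-matching velocity estimate and a
    network-regressed velocity estimate (an assumed weighting scheme)."""
    w_scan, w_nn = 1.0 / var_scan, 1.0 / var_nn
    v = (w_scan * v_scan + w_nn * v_nn) / (w_scan + w_nn)
    var = 1.0 / (w_scan + w_nn)
    return v, var

# In a narrow tunnel the scan match may be unreliable (large variance),
# so the fused estimate leans on the network's regression.
v, var = fuse_velocity(v_scan=0.0, var_scan=4.0, v_nn=10.0, var_nn=0.25)
print(round(v, 2))  # → 9.41
```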

    RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar

    This paper presents an efficient annotation procedure and an application thereof to end-to-end, rich semantic segmentation of the sensed environment using FMCW scanning radar. We advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions. We avoid laborious manual labelling by exploiting the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors, for which semantic segmentation is an already consolidated procedure. The training procedure leverages a state-of-the-art natural image segmentation system which is publicly available and, as such, in contrast to previous approaches, allows for the production of copious labels for the radar stream by incorporating four camera and two LiDAR streams. Additionally, the losses are computed taking into account labels out to the radar sensor horizon by accumulating LiDAR returns along a pose-chain ahead of and behind the current vehicle position. Finally, we present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.
    Comment: submitted to IEEE Intelligent Vehicles Symposium (IV) 202
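The pose-chain accumulation step can be sketched as follows: transform LiDAR returns gathered at several nearby timestamps into the current vehicle frame and concatenate them. The SE(2) pose representation and the toy data are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def accumulate_lidar(scans, poses, current_pose):
    """Accumulate LiDAR returns from several timestamps into the current
    vehicle frame. `poses` are 3x3 SE(2) matrices (world <- vehicle);
    each scan is an (N, 2) array of points in its own vehicle frame."""
    to_current = np.linalg.inv(current_pose)
    out = []
    for scan, pose in zip(scans, poses):
        homo = np.hstack([scan, np.ones((len(scan), 1))])   # (N, 3) homogeneous
        out.append((to_current @ pose @ homo.T).T[:, :2])   # into current frame
    return np.vstack(out)

eye = np.eye(3)
ahead = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])  # 5 m ahead
pts = accumulate_lidar([np.zeros((1, 2)), np.zeros((1, 2))], [eye, ahead], eye)
# One return at the origin and one 5 m ahead: [[0, 0], [5, 0]]
```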

    Robot-Relay : Building-Wide, Calibration-Less Visual Servoing with Learned Sensor Handover Network

    We present a system which grows and manages a network of remote viewpoints during the natural installation cycle for a newly installed camera network or a newly deployed robot fleet. No explicit notion of camera position or orientation is required, neither global (i.e. relative to a building plan) nor local (i.e. relative to an interesting point in a room). Furthermore, no metric relationship between viewpoints is required. Instead, we leverage our prior work in effective remote control without extrinsic or intrinsic calibration and extend it to the multi-camera setting. In this setting, we memorise, from simultaneous robot detections in the tracker thread, soft pixel-wise topological connections between viewpoints. We demonstrate our system with repeated autonomous traversals of workspaces connected by a network of six cameras across a productive office environment.
    Comment: Paper accepted to the 18th International Symposium on Experimental Robotics (ISER 2023)

    Active Galactic Nuclei in Groups and Clusters of Galaxies: Detection and Host Morphology

    The incidence and properties of Active Galactic Nuclei (AGN) in the field, groups, and clusters can provide new information about how these objects are triggered and fueled, similar to how these environments have been employed to study galaxy evolution. We have obtained new XMM-Newton observations of seven X-ray selected groups and poor clusters with 0.02 < z < 0.06 for comparison with previous samples that mostly included rich clusters and optically-selected groups. Our final sample has ten groups and six clusters in this low-redshift range (split at a velocity dispersion of $\sigma = 500$ km/s). We find that the X-ray selected AGN fraction increases from $f_A(L_X>10^{41}; M_R<M_R^*+1) = 0.047^{+0.023}_{-0.016}$ in clusters to $0.091^{+0.049}_{-0.034}$ for the groups (85% significance), or a factor of two, for AGN above a 0.3-8 keV X-ray luminosity of $10^{41}$ erg/s hosted by galaxies more luminous than $M_R^*+1$. The trend is similar, although less significant, for a lower-luminosity host threshold of $M_R = -20$ mag. For many of the groups in the sample we have also identified AGN via standard emission-line diagnostics and find that these AGN are nearly disjoint from the X-ray selected AGN. Because there are substantial differences in the morphological mix of galaxies between groups and clusters, we have also measured the AGN fraction for early-type galaxies alone to determine if the differences are directly due to environment, or indirectly due to the change in the morphological mix. We find that the AGN fraction in early-type galaxies is also lower in clusters, $f_{A,n>2.5}(L_X>10^{41}; M_R<M_R^*+1) = 0.048^{+0.028}_{-0.019}$, compared to $0.119^{+0.064}_{-0.044}$ for the groups (92% significance), a result consistent with the hypothesis that the change in AGN fraction is directly connected to environment.
    Comment: 18 pages, 9 figures; accepted by The Astrophysical Journal; for higher-resolution versions of some figures, see http://u.arizona.edu/~tjarnold/Arnold09
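The quoted AGN fractions with asymmetric errors are binomial estimates. A sketch of computing such a fraction with a Wilson score interval follows; the paper's own statistical treatment may differ, and the counts used below are illustrative, not the paper's data.

```python
import math

def binomial_fraction(n_hits: int, n_total: int, z: float = 1.0):
    """Binomial fraction with a Wilson score interval (z=1 for ~68%).
    Returns (fraction, minus_error, plus_error); the interval is
    asymmetric about the point estimate, as in the quoted values."""
    p = n_hits / n_total
    denom = 1 + z * z / n_total
    centre = (p + z * z / (2 * n_total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n_total
                                   + z * z / (4 * n_total ** 2))
    return p, p - (centre - half), (centre + half) - p

# Illustrative counts only (5 AGN among 105 galaxies):
f, minus, plus = binomial_fraction(5, 105)
print(f"{f:.3f} -{minus:.3f} +{plus:.3f}")
```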

    RSL-Net: Localising in Satellite Images From a Radar on the Ground

    This paper is about localising a vehicle in an overhead image using FMCW radar mounted on a ground vehicle. FMCW radar offers extraordinary promise and efficacy for vehicle localisation: it is impervious to all weather types and lighting conditions. However, the complexity of the interactions between millimetre radar waves and the physical environment makes it a challenging domain. Infrastructure-free, large-scale radar-based localisation is in its infancy. Typically a map is built and suitable techniques, compatible with the nature of the sensor, are brought to bear. In this work we eschew the need for a radar-based map; instead we simply use an overhead image -- a resource readily available everywhere. This paper introduces a method that not only naturally deals with the complexity of the signal type but does so in the context of cross-modal processing.
    Comment: Accepted to IEEE Robotics and Automation Letters (RA-L)